BoxE: A Box Embedding Model for Knowledge Base Completion
Knowledge base completion (KBC) aims to automatically infer missing facts by exploiting information already present in a knowledge base (KB). A promising approach for KBC is to embed knowledge into latent spaces and make predictions from learned embeddings. However, existing embedding models are subject to at least one of the following limitations: (1) theoretical inexpressivity, (2) lack of support for prominent inference patterns (e.g., hierarchies), (3) lack of support for KBC over higher-arity relations, and (4) lack of support for incorporating logical rules. Here, we propose a spatio-translational embedding model, called BoxE, that simultaneously addresses all these limitations. BoxE embeds entities as points, and relations as a set of hyper-rectangles (or boxes), which spatially characterize basic logical properties. This seemingly simple abstraction yields a fully expressive model offering a natural encoding for many desired logical properties. BoxE can both capture and inject rules from rich classes of rule languages, going well beyond individual inference patterns. By design, BoxE naturally applies to higher-arity KBs. We conduct a detailed experimental analysis, and show that BoxE achieves state-of-the-art performance, both on benchmark knowledge graphs and on more general KBs, and we empirically show the power of integrating logical rules.
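The following is a minimal sketch of the spatio-translational idea described in the abstract, restricted to binary facts and a simple in-box test. All names, dimensions, and values are illustrative assumptions; the paper's actual model uses a distance-based scoring function and a learned objective, not this boolean check.

```python
import numpy as np

def in_box(point, lower, upper):
    """True if the point lies inside the axis-aligned box [lower, upper]."""
    return bool(np.all(point >= lower) and np.all(point <= upper))

def binary_fact_holds(head, tail, head_box, tail_box, entities):
    """Check a binary fact rel(head, tail) under the box/bump abstraction.

    entities[e] = (base, bump): a base position plus a translational bump
    that e applies to the other argument of the fact.
    """
    base_h, bump_h = entities[head]
    base_t, bump_t = entities[tail]
    final_h = base_h + bump_t   # head point, bumped by the tail entity
    final_t = base_t + bump_h   # tail point, bumped by the head entity
    return in_box(final_h, *head_box) and in_box(final_t, *tail_box)

# Toy example: 2-dimensional embeddings with made-up values.
entities = {
    "alice": (np.array([0.0, 0.0]), np.array([1.0, 0.0])),
    "bob":   (np.array([2.0, 2.0]), np.array([0.0, 1.0])),
}
head_box = (np.array([-1.0, 0.5]), np.array([1.0, 1.5]))   # box for argument 1
tail_box = (np.array([2.5, 1.5]), np.array([3.5, 2.5]))    # box for argument 2

print(binary_fact_holds("alice", "bob", head_box, tail_box, entities))  # True
```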
Review for NeurIPS paper: BoxE: A Box Embedding Model for Knowledge Base Completion
Additional Feedback: Please number ALL equations for easy reference, at least in the preliminary submission.
L139: Translational bumps are certainly very expressive, but a likely first reaction is that they are too expressive. Perhaps you need a couple of sentences right here on how you control their power.
L153: "for the sample KG, there are 4^2 potential configurations". There are four entities and two binary relations. For each relation, each slot can be occupied by any one of four entities (assuming selectively reflexive and symmetric relations are allowed).
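For reference, one literal reading of the reviewer's remark at L153 (this count is my own, not a figure quoted from the paper): with 4 entities and 2 binary relations, each relation admits 4^2 = 16 head-tail assignments, giving 2 x 4^2 = 32 candidate facts in total.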
Review for NeurIPS paper: BoxE: A Box Embedding Model for Knowledge Base Completion
The paper aims to improve knowledge base modelling. In this regard, the authors propose a rather ingenious use of box embeddings as the latent representation for relations. Specifically, each n-ary relation is represented by n boxes and each entity is represented by two vectors. Having a pair of vectors is very powerful, as it allows the model to capture complex interactions across entities. In particular, the authors show how their proposed box embeddings can simultaneously handle symmetry, asymmetry, anti-symmetry, and transitivity. No previous framework is claimed to be as flexible or as capable of handling all these patterns.
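To make the n-boxes-per-relation scheme mentioned in the review concrete, here is a hedged n-ary extension of the earlier sketch. The function name, arguments, and toy values are hypothetical; the paper scores facts by distance to the boxes rather than a hard membership test.

```python
import numpy as np

def nary_fact_holds(args, boxes, entities):
    """Check an n-ary fact rel(e_1, ..., e_n) under the box/bump abstraction.

    args     -- tuple of entity names (length n)
    boxes    -- list of (lower, upper) pairs, one axis-aligned box per argument position
    entities -- dict mapping entity name to (base, bump) vectors
    """
    for i, ei in enumerate(args):
        base_i, _ = entities[ei]
        # Final point for position i: base of e_i plus the bumps of all *other* arguments.
        bump_sum = sum(entities[ej][1] for j, ej in enumerate(args) if j != i)
        final_i = base_i + bump_sum
        lower, upper = boxes[i]
        if not (np.all(final_i >= lower) and np.all(final_i <= upper)):
            return False
    return True

# Toy arity-3 example with made-up names and values.
ents = {
    "e1": (np.zeros(2), np.array([0.5, 0.0])),
    "e2": (np.ones(2),  np.array([0.0, 0.5])),
    "e3": (np.array([2.0, 0.0]), np.array([0.0, 0.0])),
}
boxes = [(np.array([-1.0, -1.0]), np.array([1.0, 1.0])),
         (np.array([0.0, 0.0]),   np.array([2.0, 2.0])),
         (np.array([1.0, -1.0]),  np.array([3.0, 1.0]))]
print(nary_fact_holds(("e1", "e2", "e3"), boxes, ents))  # True
```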